Lorenzo Rosasco
Massachusetts Institute of Technology and Istituto Italiano di Tecnologia
(http://web.mit.edu/lrosasco/www/)
Monday 12 March 2012
12 - 1pm
B10 Seminar Room, Basement,
Alexandra House, 17 Queen Square, London, WC1N 3AR
Learning Functions and (Data) Sets with Spectral Regularization
Abstract:
Stability and regularization are often the key to learning successfully from sampled or noisy data. In this talk we show how different paradigms, beyond penalized empirical risk minimization, can be used to design learning algorithms that ensure stability and hence generalization. We first illustrate our approach for supervised learning (scalar or vector valued) and then for the unsupervised problem of estimating the set around which the data are concentrated. The different algorithms can be described within a unified framework, namely spectral regularization. Their analysis combines classical concepts from physics and signal processing with tools from high-dimensional probability, such as concentration of measure. Our study highlights connections between numerical and statistical stability.
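To make the idea concrete, here is a minimal sketch (not code from the talk) of spectral regularization for least squares: different "filter functions" applied to the singular values of the data matrix yield different regularized solutions, with the Tikhonov (ridge) filter and spectral cut-off (truncated SVD) as two classical instances. The function names and the parameter `lam` are illustrative choices, not notation from the talk.

```python
import numpy as np

def spectral_solve(X, y, filt):
    """Solve the least squares problem X w ~ y, replacing 1/s by filt(s)
    on each singular value s (spectral filtering of the pseudoinverse)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ (filt(s) * (U.T @ y))

# Two classical filter functions (lam is a hypothetical regularization parameter):
tikhonov = lambda s, lam=0.1: s / (s**2 + lam)              # Tikhonov / ridge filter
cutoff   = lambda s, lam=0.1: np.where(s > lam, 1.0 / s, 0.0)  # spectral cut-off (truncated SVD)

# Toy regression problem: recover w_true from slightly noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.01 * rng.standard_normal(50)

w_tik = spectral_solve(X, y, tikhonov)
w_cut = spectral_solve(X, y, cutoff)
```

Both filters approximate the inverse 1/s for large singular values but tame the small, noise-amplifying ones, which is the stability mechanism the abstract alludes to.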
Bio:
Lorenzo Rosasco is an assistant professor at the Computer Science Department of the University of Genova, Italy, currently on leave of absence at the Massachusetts Institute of Technology (MIT), where he is team leader of the IIT@MIT lab, a joint laboratory between the Istituto Italiano di Tecnologia (IIT) and MIT. He received his PhD from the University of Genova in 2006, where he worked under the supervision of Alessandro Verri and Ernesto De Vito in the SLIPGURU group. He was a visiting student with Tomaso Poggio at the Center for Biological and Computational Learning (CBCL) at MIT, with Steve Smale at the Toyota Technological Institute at Chicago (TTI-Chicago), and with Sergei Pereverzev at the Johann Radon Institute for Computational and Applied Mathematics. Between 2006 and 2009 he was a postdoctoral fellow at CBCL working with Tomaso Poggio. His research focuses on the theory and algorithms of computational learning, where he has developed and analyzed methods to learn from small as well as large samples of high-dimensional data, using analytical and probabilistic tools.